#engineering services database
Text
MQTT Protocol and Security in Flutter Application
Introduction: MQTT (Message Queuing Telemetry Transport) is a lightweight publish/subscribe messaging protocol that is ideal for Internet of Things (IoT) applications and other devices with limited resources. MQTT also supports security features such as username/password authentication and TLS/SSL encryption.
Flutter is a cross-platform mobile development framework that allows you to create native apps for Android, iOS, web, and desktop from a single codebase. Flutter is also known for its performance and its ability to create beautiful, fluid user interfaces.
In this blog post, we will discuss how to use MQTT in a Flutter application. We will also cover the security implications of using MQTT and how to secure your Flutter application.
Using MQTT in Flutter
The mqtt_client package for Dart and Flutter provides a simple way to connect to an MQTT broker and to publish and subscribe to topics. The library is easy to use and can be integrated into your Flutter application with just a few lines of code.
To use the mqtt_client package, you first need to install it. You can do this by running the following command in your Flutter project:
flutter pub add mqtt_client
Once the library is installed, you can create a new MQTT client object and connect to the broker. The following code sketches one way to do this with the mqtt_client package:
import 'package:mqtt_client/mqtt_client.dart';
import 'package:mqtt_client/mqtt_server_client.dart';

Future<void> main() async {
  // Create an MQTT client object pointing at the broker host, with a client ID.
  final client = MqttServerClient('localhost', 'flutter_client');

  // Connect the client to the broker (port 1883 by default).
  await client.connect();

  // Subscribe to a topic.
  client.subscribe('topic', MqttQos.atMostOnce);

  // Publish a message.
  final builder = MqttClientPayloadBuilder()..addString('Hello, world!');
  client.publishMessage('topic', MqttQos.atMostOnce, builder.payload!);
}
This code connects to the MQTT broker running on localhost (port 1883), subscribes to the topic topic, and publishes the message Hello, world! to that topic.
Other Features of the mqtt_client Package
The mqtt_client package also provides a number of other features (a short message-handling sketch follows the list below), such as:
Handling messages that are published to a topic.
Disconnecting from the broker.
Reconnecting to the broker if the connection is lost.
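For example, handling incoming messages and reacting to a lost connection might look roughly like the following sketch. It assumes the connected, subscribed client from the earlier example; the print statements are placeholders for your own handling logic.
// Listen for messages arriving on any subscribed topic.
client.updates?.listen((messages) {
  final received = messages[0].payload as MqttPublishMessage;
  final payload =
      MqttPublishPayload.bytesToStringAsString(received.payload.message);
  print('Received "$payload" on topic ${messages[0].topic}');
});

// Reconnect automatically if the connection drops, and react to disconnects.
client.autoReconnect = true;
client.onDisconnected = () => print('Disconnected from the broker');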
Security in MQTT
Despite a common misconception, the MQTT protocol is not secure by default: a plain MQTT connection is unencrypted and unauthenticated. There are a number of security best practices that should be followed to secure the communication between the client and the broker.
One of the most important security best practices is to use a secure connection between the client and the broker. This can be done by using Transport Layer Security (TLS) or Secure Sockets Layer (SSL).
Another important security best practice is to use strong passwords for the client and the broker. The passwords should be at least 12 characters long and should contain a mix of uppercase and lowercase letters, numbers, and symbols.
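Putting the first two practices together in code might look roughly like this. It is a minimal sketch that reuses the MqttServerClient from the earlier example; the port, username, and password are placeholders, and the exact TLS setup (certificates, security context) depends on your broker.
client.secure = true; // enable TLS for the connection
client.port = 8883; // a common MQTT-over-TLS port; confirm with your broker
await client.connect('myUsername', 'myStrongPassword'); // placeholder credentials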
Finally, it is important to keep the MQTT client and broker software up to date. This will ensure that they are protected from known security vulnerabilities.
Here are some additional security considerations for MQTT in Flutter applications:
Use strong passwords and keep them secret.
Use TLS to encrypt all data that is sent over the network.
Restrict access to the MQTT broker to authorized users.
Monitor the MQTT traffic for signs of malicious activity.
Conclusion
MQTT is a powerful and versatile protocol that can be used to create a variety of IoT applications. Flutter is a modern and efficient cross-platform development framework that can be used to create beautiful and user-friendly apps. By combining MQTT and Flutter, you can create powerful and secure IoT applications that can run on any device.
I hope this blog post has given you a better understanding of how to use MQTT in a Flutter application. If you have any questions, please feel free to leave a comment below. If you would like any help with web application development, please get in touch with us today!
#Data Engineering & User Experience Design Thinking data engineering services#services digital engineering#digital engineering services#services digital operations#engineering services database
0 notes
Text
Databricks vs. Snowflake: Key Differences Explained
What if businesses could overcome the challenges of data silos, slow query performance, and limited real-time analytics? Well, it's a reality now, as data cloud platforms like Databricks and Snowflake have transformed how organizations manage and analyze their data.
Founded in 2012, Snowflake emerged from the expertise of data warehousing professionals, establishing itself as a SQL-centric solution for modern data needs. In contrast, Databricks, launched shortly after in 2013, originated from the creators of Apache Spark, positioning itself as a managed service for big data processing and machine learning.
Scroll ahead to discover everything about these platforms and opt for the best option.
Benefits of Databricks and Snowflake
Here are the benefits that you can enjoy with Databricks:
It has been tailored for data science and machine learning workloads.
It supports complex data transformations and real-time analytics.
It adapts to the needs of data engineers and scientists.
It enables teams to work together on projects, enhancing innovation and efficiency.
It allows for immediate insights and data-driven decision-making.
In contrast, here are the benefits you can experience with Snowflake:
It is ideal for organizations focused on business intelligence and analytics.
Its storage and compute resources can be scaled independently, ensuring optimal performance.
It efficiently handles large volumes of data without performance issues.
It is easy to use for both technical and non-technical users, promoting widespread adoption.
It offers a wide range of functionalities to support various industry needs.
Note: Visit the Databricks and Snowflake websites to learn more about their pricing.
Now, let’s compare each of the platforms based on various use cases/features.
Databricks vs. Snowflake: Comparison of Essential Features
When comparing essential features, several use cases highlight the differences between Databricks and Snowflake. Here are the top four factors that will provide clarity on each platform's strengths and capabilities:
1. Data Ingestion: Snowflake utilizes the ‘COPY INTO’ command for data loading, often relying on third-party tools for ingestion. In contrast, Databricks enables direct interaction with data in cloud storage, providing more flexibility in handling various data formats (a short sketch follows this list).
2. Data Transformation: Snowflake predominantly uses SQL for data transformations, while Databricks leverages Spark, allowing for more extensive customization and the ability to handle massive datasets effectively.
3. Machine Learning: Databricks boasts a mature ecosystem for machine learning with features like MLflow and model serving. On the other hand, Snowflake is catching up with the introduction of Snowpark, allowing users to run machine learning models within its environment.
4. Data Governance: Snowflake provides extensive metadata and cost management features, while Databricks offers a robust data catalog through its Unity Catalog (it is still developing its cost management capabilities).
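As an illustration of the ingestion difference, the sketch below uses made-up stage, bucket, and table names. A Snowflake-style load typically runs a SQL COPY INTO statement over staged files, while a Databricks notebook can read the same files directly from cloud storage through its pre-defined spark session.
-- Snowflake: load staged Parquet files into a table
COPY INTO sales_raw
FROM @my_s3_stage/sales/
FILE_FORMAT = (TYPE = 'PARQUET');

# Databricks: read the same files straight from cloud storage with Spark
df = spark.read.parquet("s3://my-bucket/sales/")
df.createOrReplaceTempView("sales_raw")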
In a nutshell, both Databricks and Snowflake have carved their niches in the data cloud landscape, each with its unique capabilities. As both platforms continue to evolve and expand their feature sets, the above read will help businesses make informed decisions to optimize their data strategies and achieve greater insights.
Feel free to share this microblog with your network and connect with us at Nitor Infotech to elevate your business through cutting-edge technologies.
#data bricks#data warehouse#database warehousing#data lake#snowflake data#software development#snowflake pricing#snowflake#software engineering#blog#software services#artificial intelligence
0 notes
Text
Mastering Web Development: Setting Up VS Code for JavaScript Development
For modern web developers, Visual Studio Code (VS Code) is a powerful tool that enhances productivity and streamlines the coding process. If you’re focused on JavaScript development, setting up VS Code properly is crucial for an efficient workflow.
Why Choose VS Code? VS Code offers robust features, including an integrated terminal, version control, and extensive extensions. Its flexibility makes it ideal for JavaScript projects, whether you’re working on small scripts or large applications.
Setup Steps
Install VS Code: Download and install the latest version from the official website.
Configure Extensions: Enhance your JavaScript development experience with extensions like Prettier for code formatting and ESLint for code quality.
Customize Your Environment: Adjust settings and themes to suit your preferences, and configure the integrated terminal for seamless command-line access (see the settings sketch below).
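As an illustration, a minimal .vscode/settings.json for this setup might look roughly like the sketch below. It assumes the Prettier and ESLint extensions are installed, and every value is only a starting point to adjust to your own workflow.
{
  "editor.defaultFormatter": "esbenp.prettier-vscode",
  "editor.formatOnSave": true,
  "editor.codeActionsOnSave": {
    "source.fixAll.eslint": "explicit"
  },
  "terminal.integrated.defaultProfile.windows": "PowerShell"
}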
By optimizing your VS Code setup, you’ll improve your coding efficiency and enjoy a more productive development process. Embrace the full potential of your development environment and see how it transforms your workflow!
#wordpress speed optimization service#core web vitals optimization service#fix search engine optimization html code#setup vs code for javascript development#wordpress database optimization
0 notes
Text
Mobile App Expert Services in India
In the ever-evolving landscape of mobile application services in India, Nivedita stands tall as a leading provider of cutting-edge solutions. This article delves into the myriad reasons why Nivedita has emerged as the go-to choice for businesses seeking unparalleled mobile application development services in India. The company’s dedication to crafting top-notch mobile solutions, embracing cutting-edge technologies, and fostering client success sets it apart as the best choice for businesses seeking a reliable and visionary mobile applications partner.
Power Your Success: Mobile App Services
Mobile App Experts India
At the core of Nivedita’s success lies its unwavering commitment to crafting exceptional mobile experiences. Nivedita’s mobile application services in India are a testament to innovation and functionality, designed to seamlessly integrate with the dynamic needs of today’s businesses. We take pride in offering tailored mobile solutions that cater to a diverse range of industries. Whether it’s healthcare, finance, e-commerce, or any other sector, Nivedita’s team of skilled developers ensures that each application is customized to meet the unique requirements of the client.
#Mobile App Development India#Android App Development Services#iOS App Development Company#Mobile App Deployment Solutions#E-commerce Mobile App Development#web development india#software development#digital marketing agency india#search engine marketing#database and hosting services
0 notes
Text
Automatically clean up data sets to prevent ‘garbage in, garbage out’ - Technology Org
New Post has been published on https://thedigitalinsider.com/automatically-clean-up-data-sets-to-prevent-garbage-in-garbage-out-technology-org/
‘Garbage in, garbage out’ has become a winged expression for the concept that flawed input data will lead to flawed output data. Practical examples abound. If a dataset contains temperature readings in Celsius and Fahrenheit without proper conversion, any analysis based on that data will be flawed. If the input data for a gift recommender system contains errors in the age attribute of customers, it might accidentally suggest kids toys to grown-ups.
Illustration by Heyerlein via Unsplash, free license
At a time when more and more companies, organizations and governments base decisions on data analytics, it is highly important to ensure good and clean data sets. That is what Sebastian Schelter and his colleagues are working on. Schelter is an assistant professor at the Informatics Institute of the University of Amsterdam, working in the Intelligent Data Engineering Lab (INDElab). Academic work he published in 2018, when he was working at Amazon, presently powers some of Amazon’s data quality services. At UvA he is expanding that work.
What are the biggest problems with data sets?
‘Missing data is one big problem. Think of an Excel sheet where you have to fill in values in each cell, but some cells are empty. Maybe data got lost, maybe data just wasn’t collected. That’s a very common problem. The second big problem is that some data are wrong. Let’s say you have data about the age of people and there appears to be somebody who is a thousand years old.
A third major problem with data sets is data integration errors, which arise from combining different data sets. Very often this leads to duplicates. Think of two companies that merge. They will have address databases, and maybe the same address is spelled in slightly different ways: one database uses ‘street’ and the other one uses ‘st.’. Or the spelling might be different.
Finally, the fourth major problem is called ‘referential integrity’. If you have datasets that reference each other, you need to make sure that the referencing is done correctly. If a company has a dataset with billing data and a bank has a dataset with bank account numbers of their customers, you want a bank account number in the billing dataset to reference an existing bank account at that bank, otherwise it would reference something that does not exist. Often there are problems with references between two data sets.’
How does your research tackle these problems?
‘Data scientists spend a lot of their time cleaning up flawed data sets. The numbers vary, but surveys have shown that it’s up to eighty percent of their time. That’s a big waste of time and talent. To counter this, we have developed open source software, called Deequ. Instead of data scientists having to write a program that validates the data quality, they can just write down what their data should look like. For example, they can prescribe things like: ‘there shouldn’t be missing data in the column with social security numbers’ or ‘the values in the age-column shouldn’t be negative’. Then Deequ runs over the data in an efficient way and tells you whether the test passed or not. Often Deequ also shows the particular data records that violated the test.’
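As an illustration, a declarative check like the ones Schelter describes might look roughly like this with PyDeequ, the Python port of Deequ mentioned below; the SparkSession (spark), the DataFrame (df), and the column names are placeholders.
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationResult, VerificationSuite

# Declare what the data should look like instead of hand-writing validation code.
check = (Check(spark, CheckLevel.Error, "basic quality checks")
         .isComplete("social_security_number")  # no missing values in this column
         .isNonNegative("age"))                 # ages must not be negative

# Run the checks over the DataFrame and see which constraints passed.
result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(check)
          .run())
VerificationResult.checkResultsAsDataFrame(spark, result).show()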
How is Deequ used in practice?
‘The original scientific paper was written when I was working at Amazon. Since then, the open source implementation of this work has become pretty popular for all kinds of applications in all kinds of domains. There is a Python-version which has more than four million downloads per month. After I left Amazon, the company built two cloud services based on Deequ, one of them called AWS Glue Data Quality. Amazon’s cloud is the most used cloud in the world, so many companies that use it, have access to our way of data cleaning.’
What is the current research you are doing to clean up data sets?
‘At the moment we are developing a way to measure data quality of streaming data in our ICAI-lab ‘AI for Retail’, cooperating with bol.com. Deequ was developed for data at rest, but many use cases have a continuous stream of data. The data might be too big to store, there might be privacy reasons for not storing them, or it might simply be too expensive to store the data. So, we built StreamDQ, which can run quality checks on streaming data. A big challenge is that you can’t spend much time on processing the data, otherwise everything will be slowed down too much. So, you can only do certain tests and sometimes you have to use approximations. We have a working prototype, and we are now evaluating it.’
Source: University of Amsterdam
#ai#Amazon#Analysis#Analytics#applications#AWS#bases#cell#Cells#challenge#Cloud#cloud services#Companies#data#data analytics#data cleaning#Data Engineering#Data Integration#data quality#Database#datasets#domains#engineering#excel#excel sheet#Featured technology news#how#illustration#integration#it
0 notes
Text
Amazon Relational Database Service (RDS) Explained for Cloud Developers
Full Video Link - https://youtube.com/shorts/zBv6Tcw6zrU Hi, a new #video #tutorial on #amazonrds #aws #rds #relationaldatabaseservice is published on #codeonedigest #youtube channel. @java @awscloud @AWSCloudIndia @YouTube #youtube @codeonedig
Amazon Relational Database Service (Amazon RDS) is a collection of managed services that makes it simple to set up, operate, and scale relational databases in the cloud. You can choose from seven popular engines i.e., Amazon Aurora with MySQL & PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. It provides cost-efficient, resizable capacity for an industry-standard…
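As a quick illustration, provisioning one of these managed engines from code might look roughly like the sketch below, using the boto3 SDK; the identifier, credentials, region, and sizing values are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Provision a small managed PostgreSQL instance.
rds.create_db_instance(
    DBInstanceIdentifier="demo-postgres",
    Engine="postgres",               # any supported engine, e.g. mysql, mariadb
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,             # storage in GiB
    MasterUsername="demo_admin",
    MasterUserPassword="change-me-please",
)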
#amazon rds access from outside#amazon rds aurora#amazon rds automated backup#amazon rds backup#amazon rds backup and restore#amazon rds guide#amazon rds snapshot export to s3#amazon rds vs aurora#amazon web services#aws#aws cloud#aws rds aurora tutorial#aws rds engine#aws rds explained#aws rds performance insights#aws rds tutorial#aws rds vs aurora#cloud computing#relational database#relational database management system#relational database service
1 note
Text
i know everyone is really excited for the oblivion remake because i was too. oblivion was the first real video game i ever played when i was a kid, and is literally the reason i am a gamer today, but BDS has called for a microsoft boycott, and that includes anything made by bethesda.
this isn't just a "oh they have some obscure business partnerships in isr*el" or "oh they donate to this or that lobby" sort of boycott either, although those are important too. my tone is not meant to be flippant about them, but rather i want to emphasize the gravity of how microsoft directly and deliberately contributes to the palestinian death toll daily, in a way that is uniquely cruel and complicit.
microsoft has had a $35 million dollar contract with the isr*eli military since 2002. they provide cloud storage for surveillance data of gazan civillians, and an artificial intelligence program called a "mass assassination factory" to assist in planning and targeting their attacks, many of which are on civilians or involve mass civilian casualties.
microsoft's service agreements with the isr*eli military also includes the CPU responsible for the military's tech infrastructure, military intelligence units that develop spy technology used against palestinians and lebanese, the maintenance of the palestinian population registry that tracks and (illegally) limits the movement of palestinains in the west bank and gaza, their air force targeting database, and much more. they work closely with isr*eli military intelligence agencies on surveillance systems used to monitor palestians, provide specialized consulting, technical and engineering support, hosts training software for the IOF, provide financial support to organizations based in the illegally occupied west bank, and have repeatedly invested in isr*eli start ups specializing in war technology.
in 2020, internal and external pressure forced microsoft to pull out of its 74 million dollar investment in an isr*eli company that violated international law due to its use of facial recognition technology for military surveillance.
in 2021, microsoft signed a new, 3-year contract with the isr*eli ministry of defense worth $133 million dollars. the isr*eli military is microsoft's second largest military customer. the first? the united states.
you can read more (w/ sources) about microsoft's complicity here.
BDS asks us to boycott microsoft products whenever possible.
microsoft is directly complicit in countless isr*eli war crimes, and the money you provide them will further proliferate this violence. i know the oblivion remake was exciting, but please, consider the lives of palestinians above your own nostalgia. no one is free until everyone is free.
767 notes
Text
Due to issues with Jack’s fertility, the Fenton kids were both conceived using donor sperm. The donor they ended up receiving wasn’t anyone special; just a random chemical engineer from Gotham City by the name of Jack Napier
And perhaps things could have ended there. Perhaps in another reality, they did, and Danny never had to pay more than a passing thought to his genetic father.
But in this world, Jack Napier faced one extraordinarily bad day, and it broke him. All he could do in response was laugh at the utter meaningless of it all. And by the time his cackles subsided, the man he had once been was dead. In his place, only the Joker remained.
Yet despite how suddenly everything changed, neither of them knew the truth of things right away. The Joker had never even known he had a son, since the service didn’t inform people if their donations were used. And though Danny had known Napier’s name, he had no reason to pay enough attention to the man to notice his disappearance.
That ignorance couldn’t last forever, though.
All it took was a few medical tests from Danny before the automated database systems made the connection.
And that certainly didn’t go unnoticed by those in Gotham.
It looks like it’s time for Uncle J to pay his kid a surprise visit! Ooh, maybe he could break out the ol’ electroshock kit and bring back Joker Jr too, make it a proper family reunion!
#dp x dc#dpxdc#dc x dp#dcxdp#danny phantom x dc#danny phantom x dc crossover#dp x dc prompt#dpxdc prompt#dc x dp prompt#dcxdp prompt#dpxdc the joker#danny is the joker’s biological son#IMO danny definitely shouldn’t be overpowered in this#like i feel like this’d have way more potential for drama than comedy#having danny genuinely be on the back foot and have to work with his still-inimical rogues and/or the Bats to make it through#the joker has survived as long as he has for a *reason*#(well and plot armor because he’s popular but putting aside that)#he’s smart and sets up plans to push things in his favor#he may be at a disadvantage by not being on his home turf but he can still set up schemes
1K notes
Text
The reason you can’t buy a car is the same reason that your health insurer let hackers dox you

On July 14, I'm giving the closing keynote for the fifteenth HACKERS ON PLANET EARTH, in QUEENS, NY. Happy Bastille Day! On July 20, I'm appearing in CHICAGO at Exile in Bookville.
In 2017, Equifax suffered the worst data-breach in world history, leaking the deep, nonconsensual dossiers it had compiled on 148m Americans and 15m Britons (and 19k Canadians) into the world, to form an immortal, undeletable reservoir of kompromat and premade identity-theft kits:
https://en.wikipedia.org/wiki/2017_Equifax_data_breach
Equifax knew the breach was coming. It wasn't just that their top execs liquidated their stock in Equifax before the announcement of the breach – it was also that they ignored years of increasingly urgent warnings from IT staff about the problems with their server security.
Things didn't improve after the breach. Indeed, the 2017 Equifax breach was the starting gun for a string of more breaches, because Equifax's servers didn't just have one fubared system – it was composed of pure, refined fubar. After one group of hackers breached the main Equifax system, other groups breached other Equifax systems, over and over, and over:
https://finance.yahoo.com/news/equifax-password-username-admin-lawsuit-201118316.html
Doesn't this remind you of Boeing? It reminds me of Boeing. The spectacular 737 Max failures in 2018 weren't the end of the scandal. They weren't even the scandal's start – they were the tipping point, the moment in which a long history of lethally defective planes "breached" from the world of aviation wonks and into the wider public consciousness:
https://en.wikipedia.org/wiki/List_of_accidents_and_incidents_involving_the_Boeing_737
Just like with Equifax, the 737 Max disasters tipped Boeing into a string of increasingly grim catastrophes. Each fresh disaster landed with the grim inevitability of your general contractor texting you that he's just opened up your ceiling and discovered that all your joists had rotted out – and that he won't be able to deal with that until he deals with the termites he found last week, and that they'll have to wait until he gets to the cracks in the foundation slab from the week before, and that those will have to wait until he gets to the asbestos he just discovered in the walls.
Drip, drip, drip, as you realize that the most expensive thing you own – which is also the thing you had hoped to shelter for the rest of your life – isn't even a teardown, it's just a pure liability. Even if you razed the structure, you couldn't start over, because the soil is full of PCBs. It's not a toxic asset, because it's not an asset. It's just toxic.
Equifax isn't just a company: it's infrastructure. It started out as an engine for racial, political and sexual discrimination, paying snoops to collect gossip from nosy neighbors, which was assembled into vast warehouses full of binders that told bank officers which loan applicants should be denied for being queer, or leftists, or, you know, Black:
https://jacobin.com/2017/09/equifax-retail-credit-company-discrimination-loans
This witch-hunts-as-a-service morphed into an official part of the economy, the backbone of the credit industry, with a license to secretly destroy your life with haphazardly assembled "facts" about your life that you had the most minimal, grudging right to appeal (or even see). Turns out there are a lot of customers for this kind of service, and the capital markets showered Equifax with the cash needed to buy almost all of its rivals, in mergers that were waved through by a generation of Reaganomics-sedated antitrust regulators.
There's a direct line from that acquisition spree to the Equifax breach(es). First of all, companies like Equifax were early adopters of technology. They're a database company, so they were the crash-test dummies for every generation of database. These bug-riddled, heavily patched systems were overlaid with subsequent layers of new tech, with new defects to be patched and then overlaid with the next generation.
These systems are intrinsically fragile, because things fall apart at the seams, and these systems are all seams. They are tech-debt personified. Now, every kind of enterprise will eventually reach this state if it keeps going long enough, but the early digitizers are the bow-wave of that coming infopocalypse, both because they got there first and because the bottom tiers of their systems are composed of layers of punchcards and COBOL, crumbling under the geological stresses of seventy years of subsequent technology.
The single best account of this phenomenon is the British Library's postmortem of their ransomware attack, which is also in the running for "best hard-eyed assessment of how fucked things are":
https://www.bl.uk/home/british-library-cyber-incident-review-8-march-2024.pdf
There's a reason libraries, cities, insurance companies, and other giant institutions keep getting breached: they started accumulating tech debt before anyone else, so they've got more asbestos in the walls, more sagging joists, more foundation cracks and more termites.
That was the starting point for Equifax – a company with a massive tech debt that it would struggle to pay down under the most ideal circumstances.
Then, Equifax deliberately made this situation infinitely worse through a series of mergers in which it bought dozens of other companies that all had their own version of this problem, and duct-taped their failing, fucked up IT systems to its own. The more seams an IT system has, the more brittle and insecure it is. Equifax deliberately added so many seams that you need to be able to visualized additional spatial dimensions to grasp them – they had fractal seams.
But wait, there's more! The reason to merge with your competitors is to create a monopoly position, and the value of a monopoly position is that it makes a company too big to fail, which makes it too big to jail, which makes it too big to care. Each Equifax acquisition took a piece off the game board, making it that much harder to replace Equifax if it fucked up. That, in turn, made it harder to punish Equifax if it fucked up. And that meant that Equifax didn't have to care if it fucked up.
Which is why the increasingly desperate pleas for more resources to shore up Equifax's crumbling IT and security infrastructure went unheeded. Top management could see that they were steaming directly into an iceberg, but they also knew that they had a guaranteed spot on the lifeboats, and that someone else would be responsible for fishing the dead passengers out of the sea. Why turn the wheel?
That's what happened to Boeing, too: the company acquired new layers of technical complexity by merging with rivals (principally McDonnell-Douglas), and then starved the departments that would have to deal with that complexity because it was being managed by execs whose driving passion was to run a company that was too big to care. Those execs then added more complexity by chasing lower costs by firing unionized, competent, senior staff and replacing them with untrained scabs in jurisdictions chosen for their lax labor and environmental enforcement regimes.
(The biggest difference was that Boeing once had a useful, high-quality product, whereas Equifax started off as an irredeemably terrible, if efficient, discrimination machine, and grew to become an equally terrible, but also ferociously incompetent, enterprise.)
This is the American story of the past four decades: accumulate tech debt, merge to monopoly, exponentially compound your tech debt by combining barely functional IT systems. Every corporate behemoth is locked in a race between the eventual discovery of its irreparable structural defects and its ability to become so enmeshed in our lives that we have to assume the costs of fixing those defects. It's a contest between "too rotten to stand" and "too big to care."
Remember last February, when we all discovered that there was a company called Change Healthcare, and that they were key to processing virtually every prescription filled in America? Remember how we discovered this? Change was hacked, went down, ransomed, and no one could fill a scrip in America for more than a week, until they paid the hackers $22m in Bitcoin?
https://en.wikipedia.org/wiki/2024_Change_Healthcare_ransomware_attack
How did we end up with Change Healthcare as the linchpin of the entire American prescription system? Well, first Unitedhealthcare became the largest health insurer in America by buying all its competitors in a series of mergers that comatose antitrust regulators failed to block. Then it combined all those other companies' IT systems into a cosmic-scale dog's breakfast that barely ran. Then it bought Change and used its monopoly power to ensure that every Rx ran through Change's servers, which were part of that asbestos-filled, termite-infested, crack-foundationed, sag-joisted teardown. Then, it got hacked.
United's execs are the kind of execs on a relentless quest to be too big to care, and so they don't care. Which is why they had to subsequently announce that they had suffered a breach that turned the complete medical histories of one third of Americans into immortal Darknet kompromat that is – even now – being combined with breach data from Equifax and force-fed to the slaves in Cambodia and Laos's pig-butchering factories:
https://www.cnn.com/2024/05/01/politics/data-stolen-healthcare-hack/index.html
Those slaves are beaten, tortured, and punitively raped in compounds to force them to drain the life's savings of everyone in Canada, Australia, Singapore, the UK and Europe. Remember that they are downstream of the foreseeable, inevitable IT failures of companies that set out to be too big to care that this was going to happen.
Failures like Ticketmaster's, which flushed 500 million users' personal information into the identity-theft mills just last month. Ticketmaster, you'll recall, grew to its current scale through (you guessed it), a series of mergers en route to "too big to care" status, that resulted in its IT systems being combined with those of Ticketron, Live Nation, and dozens of others:
https://www.nytimes.com/2024/05/31/business/ticketmaster-hack-data-breach.html
But enough about that. Let's go car-shopping!
Good luck with that. There's a company you've never heard of. It's called CDK Global. They provide "dealer management software." They are a monopolist. They got that way after being bought by a private equity fund called Brookfield. You can't complete a car purchase without their systems, and their systems have been hacked. No one can buy a car:
https://www.cnn.com/2024/06/27/business/cdk-global-cyber-attack-update/index.html
Writing for his BIG newsletter, Matt Stoller tells the all-too-familiar story of how CDK Global filled the walls of the nation's auto-dealers with the IT equivalent of termites and asbestos, and lays the blame where it belongs: with a legal and economics establishment that wanted it this way:
https://www.thebignewsletter.com/p/a-supreme-court-justice-is-why-you
The CDK story follows the Equifax/Boeing/Change Healthcare/Ticketmaster pattern, but with an important difference. As CDK was amassing its monopoly power, one of its execs, Dan McCray, told a competitor, Authenticom founder Steve Cottrell, that if he didn't sell to CDK he would "fucking destroy" Authenticom by illegally colluding with the number two dealer management company, Reynolds.
Rather than selling out, Cottrell blew the whistle, using McCray's own words to convince a district court that CDK had violated antitrust law. The court agreed, and ordered CDK and Reynolds – who controlled 90% of the market – to continue to allow Authenticom to participate in the DMS market.
Dealers cheered this on: CDK/Reynolds had been steadily hiking prices, while ingesting dealer data and using it to gouge the dealers on additional services, while denying dealers access to their own data. The services that Authenticom provided for $35/month cost $735/month from CDK/Reynolds (they justified this price hike by saying they needed the additional funds to cover the costs of increased information security!).
CDK/Reynolds appealed the judgment to the 7th Circuit, where a panel of economists weighed in. As Stoller writes, this panel included monopoly's most notorious (and well-compensated) cheerleader, Frank Easterbrook, and the "legendary" Democrat Diane Wood. They argued for CDK/Reynolds, demanding that the court release them from their obligations to share the market with Authenticom:
https://caselaw.findlaw.com/court/us-7th-circuit/1879150.html
The 7th Circuit bought the argument, overturning the lower court and paving the way for the CDK/Reynolds monopoly, which is how we ended up with one company's objectively shitty IT systems interwoven into the sale of every car, which meant that when Russian hackers looked at that crosseyed, it split wide open, allowing them to halt auto sales nationwide. What happens next is a near-certainty: CDK will pay a multimillion dollar ransom, and the hackers will reward them by breaching the personal details of everyone who's ever bought a car, and the slaves in Cambodian pig-butchering compounds will get a fresh supply of kompromat.
But on the plus side, the need to pay these huge ransoms is key to ensuring liquidity in the cryptocurrency markets, because ransoms are now the only nondiscretionary liability that can only be settled in crypto:
https://locusmag.com/2022/09/cory-doctorow-moneylike/
When the 7th Circuit set up every American car owner to be pig-butchered, they cited one of the most important cases in antitrust history: the 2004 unanimous Supreme Court decision in Verizon v Trinko:
https://www.oyez.org/cases/2003/02-682
Trinko was a case about whether antitrust law could force Verizon, a telcoms monopolist, to share its lines with competitors, something it had been ordered to do and then cheated on. The decision was written by Antonin Scalia, and without it, Big Tech would never have been able to form. Scalia and Trinko gave us the modern, too-big-to-care versions of Google, Meta, Apple, Microsoft and the other tech baronies.
In his Trinko opinion, Scalia said that "possessing monopoly power" and "charging monopoly prices" was "not unlawful" – rather, it was "an important element of the free-market system." Scalia – writing on behalf of a unanimous court! – said that fighting monopolists "may lessen the incentive for the monopolist…to invest in those economically beneficial facilities."
In other words, in order to prevent monopolists from being too big to care, we have to let them have monopolies. No wonder Trinko is the Zelig of shitty antitrust rulings, from the decision to dismiss the antitrust case against Facebook and Apple's defense in its own ongoing case:
https://www.ftc.gov/system/files/documents/cases/073_2021.06.28_mtd_order_memo.pdf
Trinko is the origin node of too big to care. It's the reason that our whole economy is now composed of "infrastructure" that is made of splitting seams, asbestos, termites and dry rot. It's the reason that the entire automotive sector became dependent on companies like Reynolds, whose billionaire owner intentionally and illegally destroyed evidence of his company's crimes, before going on to commit the largest tax fraud in American history:
https://www.wsj.com/articles/billionaire-robert-brockman-accused-of-biggest-tax-fraud-in-u-s-history-dies-at-81-11660226505
Trinko begs companies to become too big to care. It ensures that they will exponentially increase their IT debt while becoming structurally important to whole swathes of the US economy. It guarantees that they will underinvest in IT security. It is the soil in which pig butchering grew.
It's why you can't buy a car.
Now, I am fond of quoting Stein's Law at moments like this: "anything that can't go on forever will eventually stop." As Stoller writes, after two decades of unchallenged rule, Trinko is looking awfully shaky. It was substantially narrowed in 2023 by the 10th Circuit, which had been briefed by Biden's antitrust division:
https://law.justia.com/cases/federal/appellate-courts/ca10/22-1164/22-1164-2023-08-21.html
And the cases of 2024 have something going for them that Trinko lacked in 2004: evidence of what a fucking disaster Trinko is. The wrongness of Trinko is so increasingly undeniable that there's a chance it will be overturned.
But it won't go down easy. As Stoller writes, Trinko didn't emerge from a vacuum: the economic theories that underpinned it come from some of the heroes of orthodox economics, like Joseph Schumpeter, who is positively worshipped. Schumpeter was antitrust's OG hater, who wrote extensively that antitrust law didn't need to exist because any harmful monopoly would be overturned by an inevitable market process dictated by iron laws of economics.
Schumpeter wrote that monopolies could only be sustained by "alertness and energy" – that there would never be a monopoly so secure that its owner became too big to care. But he went further, insisting that the promise of attaining a monopoly was key to investment in great new things, because monopolists had the economic power that let them plan and execute great feats of innovation.
The idea that monopolies are benevolent dictators has pervaded our economic tale for decades. Even today, critics who deplore Facebook and Google do so on the basis that they do not wield their power wisely (say, to stamp out harassment or disinformation). When confronted with the possibility of breaking up these companies or replacing them with smaller platforms, those critics recoil, insisting that without Big Tech's scale, no one will ever have the power to accomplish their goals:
https://pluralistic.net/2023/07/18/urban-wildlife-interface/#combustible-walled-gardens
But they misunderstand the relationship between corporate power and corporate conduct. The reason corporations accumulate power is so that they can be insulated from the consequences of the harms they wreak upon the rest of us. They don't inflict those harms out of sadism: rather, they do so in order to externalize the costs of running a good system, reaping the profits of scale while we pay its costs.
The only reason to accumulate corporate power is to grow too big to care. Any corporation that amasses enough power that it need not care about us will not care about it. You can't fix Facebook by replacing Zuck with a good unelected social media czar with total power over billions of peoples' lives. We need to abolish Zuck, not fix Zuck.
Zuck is not exceptional: there were a million sociopaths whom investors would have funded to monopolistic dominance if he had balked. A monopoly like Facebook has a Zuck-shaped hole at the top of its org chart, and only someone Zuck-shaped will ever fit through that hole.
Our whole economy is now composed of companies with sociopath-shaped holes at the tops of their org chart. The reason these companies can only be run by sociopaths is the same reason that they have become infrastructure that is crumbling due to sociopathic neglect. The reckless disregard for the risk of combining companies is the source of the market power these companies accumulated, and the market power let them neglect their systems to the point of collapse.
This is the system that Schumpeter, and Easterbrook, and Wood, and Scalia – and the entire Supreme Court of 2004 – set out to make. The fact that you can't buy a car is a feature, not a bug. The pig-butcherers, wallowing in an ocean of breach data, are a feature, not a bug. The point of the system was what it did: create unimaginable wealth for a tiny cohort of the worst people on Earth without regard to the collapse this would provoke, or the plight of those of us trapped and suffocating in the rubble.
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/28/dealer-management-software/#antonin-scalia-stole-your-car
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#matt stoller#monopoly#automotive#trinko#antitrust#trustbusting#cdk global#brookfield#private equity#dms#dealer management software#blacksuit#infosec#Authenticom#Dan McCray#Steve Cottrell#Reynolds#frank easterbrook#schumpeter
996 notes
Text
Vittoria Elliott at Wired:
Elon Musk’s takeover of federal government infrastructure is ongoing, and at the center of things is a coterie of engineers who are barely out of—and in at least one case, purportedly still in—college. Most have connections to Musk and at least two have connections to Musk’s longtime associate Peter Thiel, a cofounder and chairman of the analytics firm and government contractor Palantir who has long expressed opposition to democracy. WIRED has identified six young men—all apparently between the ages of 19 and 24, according to public databases, their online presences, and other records—who have little to no government experience and are now playing critical roles in Musk’s so-called Department of Government Efficiency (DOGE) project, tasked by executive order with “modernizing Federal technology and software to maximize governmental efficiency and productivity.” The engineers all hold nebulous job titles within DOGE, and at least one appears to be working as a volunteer. The engineers are Akash Bobba, Edward Coristine, Luke Farritor, Gautier Cole Killian, Gavin Kliger, and Ethan Shaotran. None have responded to requests for comment from WIRED. Representatives from OPM, GSA, and DOGE did not respond to requests for comment. Already, Musk’s lackeys have taken control of the Office of Personnel Management (OPM) and General Services Administration (GSA), and have gained access to the Treasury Department’s payment system, potentially allowing him access to a vast range of sensitive information about tens of millions of citizens, businesses, and more. On Sunday, CNN reported that DOGE personnel attempted to improperly access classified information and security systems at the US Agency for International Development (USAID), and that top USAID security officials who thwarted the attempt were subsequently put on leave. The AP reported that DOGE personnel had indeed accessed classified material. “What we're seeing is unprecedented in that you have these actors who are not really public officials gaining access to the most sensitive data in government,” says Don Moynihan, a professor of public policy at the University of Michigan. “We really have very little eyes on what's going on. Congress has no ability to really intervene and monitor what's happening because these aren't really accountable public officials. So this feels like a hostile takeover of the machinery of governments by the richest man in the world.”
[...] “To the extent these individuals are exercising what would otherwise be relatively significant managerial control over two very large agencies that deal with very complex topics,” says Nick Bednar, a professor at University of Minnesota’s school of law, “it is very unlikely they have the expertise to understand either the law or the administrative needs that surround these agencies.” Sources tell WIRED that Bobba, Coristine, Farritor, and Shaotran all currently have working GSA emails and A-suite level clearance at the GSA, which means that they work out of the agency’s top floor and have access to all physical spaces and IT systems, according to a source with knowledge of the GSA’s clearance protocols. The source, who spoke to WIRED on the condition of anonymity because they fear retaliation, says they worry that the new teams could bypass the regular security clearance protocols to access the agency’s sensitive compartmented information facility (SCIF), as the Trump administration has already granted temporary security clearances to unvetted people. This is in addition to Coristine and Bobba being listed as “experts” working at OPM. Bednar says that while staff can be loaned out between agencies for special projects or to work on issues that might cross agency lines, it’s not exactly common practice.
WIRED’s report on the 6 college-aged men between 19 and 24 that are shaping up DOGE in aiding and abetting in co-”President” Elon Musk’s technofascist takeover.
#Elon Musk#DOGE#Department of Government Efficiency#Trump Administration II#General Services Administration#Office of Personnel Management#Scott Bessent#USAID#Akash Bobba#Edward Coristine#Luke Farritor#Gautier Cole Killian#Gavin Kliger#Ethan Shaotran#Treasury Department#Musk Coup
65 notes
Text
What is Apache Kafka?
Apache Kafka is designed to handle real-time data feeds, providing a high-throughput, resilient, and scalable solution for processing and storing streams of records. The platform ensures durability by replicating data across multiple brokers in a cluster.
Kafka’s exceptional speed is coordinated by two key virtuosos:
Sequential I/O: Kafka works around the perceived slowness of disks by writing and reading its log segments sequentially, which is far faster than random disk access.
Zero Copy Principle: Kafka avoids unnecessary data copies and reduces context switches between user and kernel modes when moving data from disk to the network, making transfers more efficient.
Why Kafka?
High performance: It can handle millions of messages per second
Non-volatile storage: It stores messages on disk, which enables durability and fault-tolerance
Distributed architecture: It can handle large amounts of data and scale horizontally by adding more machines to the cluster.
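As a rough sketch of the publish/subscribe flow, here is a minimal producer and consumer using the kafka-python client; the broker address and topic name are placeholders.
from kafka import KafkaConsumer, KafkaProducer

# Produce a single record to a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("sensor-readings", b"temperature=21.5")
producer.flush()

# Consume records from the beginning of the topic.
consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
)
for record in consumer:
    print(record.value)
    break  # stop after one record for this sketch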
To learn more about Apache Kafka, read our full blog - https://bit.ly/3urUEWF
#kafka#apache kafka#real time data analysis#real time database#nitor#nitor infotech services#nitor infotech#ascendion#software development#software engineering
0 notes
Text
The Story of KLogs: What happens when a Mechanical Engineer codes
Since i no longer work at Warehouse Automation Startup (WAS for short) and havent for many years i feel as though i should recount the tale of the most bonkers program i ever wrote, but we need to establish some background
WAS has its HQ very far away from the big customer site and i worked as a Field Service Engineer (FSE) on site. so i learned early on that if a problem needed to be solved fast, WE had to do it. we never got many updates on what was coming down the pipeline for us or what issues were being worked on. this made us very independent
As such, we got good at reading the robot logs ourselves. it took too much time to send the logs off to HQ for analysis and get back what the problem was. we can read. now GETTING the logs is another thing.
the early robots we cut our teeth on used 2.4 gHz wifi to communicate with FSE's so dumping the logs was as simple as pushing a button in a little application and it would spit out a txt file
later on our robots were upgraded to use a 2.4 mHz xbee radio to communicate with us. which was FUCKING SLOW. and log dumping became a much more tedious process. you had to connect, go to logging mode, and then the robot would vomit all the logs in the past 2 min OR the entirety of its memory bank (only 2 options) into a terminal window. you would then save the terminal window and open it in a text editor to read them. it could take up to 5 min to dump the entire log file and if you didnt dump fast enough, the ACK messages from the control server would fill up the logs and erase the error as the memory overwrote itself.
this missing logs problem was a Big Deal for software who now weren't getting every log from every error so a NEW method of saving logs was devised: the robot would just vomit the log data in real time over a DIFFERENT radio and we would save it to a KQL server. Thanks Daddy Microsoft.
now whats KQL you may be asking. why, its Microsofts very own SQL clone! its Kusto Query Language. never mind that the system uses a SQL database for daily operations. lets use this proprietary Microsoft thing because they are paying us
so yay, problem solved. we now never miss the logs. so how do we read them if they are split up line by line in a database? why with a query of course!
select * from tbLogs where RobotUID = [64CharLongString] and timestamp > [UnixTimeCode]
if this makes no sense to you, CONGRATULATIONS! you found the problem with this setup. Most FSE's were BAD at SQL which meant they didnt read logs anymore. If you do understand what the query is, CONGRATULATIONS! you see why this is Very Stupid.
You could not search by robot name. each robot had some arbitrarily assigned 64 character long string as an identifier and the timestamps were not set to local time. so you had run a lookup query to find the right name and do some time zone math to figure out what part of the logs to read. oh yeah and you had to download KQL to view them. so now we had both SQL and KQL on our computers
NOBODY in the field like this.
But Daddy Microsoft comes to the rescue
see we didnt JUST get KQL with part of that deal. we got the entire Microsoft cloud suite. and some people (like me) had been automating emails and stuff with Power Automate
This is Microsoft Power Automate. its Microsoft's version of Scratch but it has hooks into everything Microsoft. SharePoint, Teams, Outlook, Excel, it can integrate with all of it. i had been using it to send an email once a day with a list of all the robots in maintenance.
this gave me an idea
and i checked
and Power Automate had hooks for KQL
KLogs is actually short for Kusto Logs
I did not know how to program in Power Automate but damn it anything is better than writing KQL queries. so i got to work. and about 2 months later i had a BEHEMOTH of a Power Automate program. it lagged the webpage and many times when i tried to edit something my changes wouldn't take and i would have to click in very specific ways to ensure none of my variables were getting nuked. i dont think this was the intended purpose of Power Automate but this is what it did
the KLogger would watch a list of Teams chats and when someone typed "klogs" or pasted a copy of an ERROR mesage, it would spring into action.
it extracted the robot name from the message and timestamp from teams
it would lookup the name in the database to find the 64 long string UID and the location that robot was assigned too
it would reply to the message in teams saying it found a robot name and was getting logs
it would run a KQL query for the database and get the control system logs then export then into a CSV
it would save the CSV with a .xls extension into a folder in SharePoint (it would make a new folder for each day and location if it didnt have one already)
it would send ANOTHER message in teams with a LINK to the file in SharePoint
it would then enter a loop and scour the robot logs looking for the keyword ESTOP to find the error. (it did this because Kusto was SLOWER than the xbee radio and had up to a 10 min delay on syncing)
if it found the error, it would adjust its start and end timestamps to capture it and export the robot logs book-ended from the event by ~ 1 min. if it didnt, it would use the timestamp from when it was triggered +/- 5 min
it saved THOSE logs to SharePoint the same way as before
it would send ANOTHER message in teams with a link to the files
it would then check if the error was 1 of 3 very specific type of error with the camera. if it was it extracted the base64 jpg image saved in KQL as a byte array, do the math to convert it, and save that as a jpg in SharePoint (and link it of course)
and then it would terminate. and if it encountered an error anywhere in all of this, i had logic where it would spit back an error message in Teams as plaintext explaining what step failed and the program would close gracefully
I deployed it without asking anyone at one of the sites that was struggling. i just pointed it at their chat and turned it on. it had a bit of a rocky start (spammed chat) but man did the FSE's LOVE IT.
about 6 months later software deployed their answer to reading the logs: a webpage that acted as a nice GUI to the KQL database. much better than a CSV file
it still needed you to scroll though a big drop-down of robot names and enter a timestamp, but i noticed something. all that did was just change part of the URL and refresh the webpage
SO I MADE KLOGS 2 AND HAD IT GENERATE THE URL FOR YOU AND REPLY TO YOUR MESSAGE WITH IT. (it also still did the control server and jpg stuff). Theres a non-zero chance that klogs was still in use long after i left that job
now i dont recommend anyone use power automate like this. its clunky and weird. i had to make a variable called "Carrage Return" which was a blank text box that i pressed enter one time in because it was incapable of understanding /n or generating a new line in any capacity OTHER then this (thanks support forum).
im also sure this probably is giving the actual programmer people anxiety. imagine working at a company and then some rando you've never seen but only heard about as "the FSE whos really good at root causing stuff", in a department that does not do any coding, managed to, in their spare time, build and release an entire workflow piggybacking on your work without any oversight, code review, or permission.....and everyone liked it
#comet tales#lazee works#power automate#coding#software engineering#it was so funny whenever i visited HQ because i would go “hi my name is LazeeComet” and they would go “OH i've heard SO much about you”
64 notes
Note
I kind of can't get over you studying freakin' *robotics engineering.* Now I'm picturing you as the cool engineer on some sci-fi show. Whaddaya think? Star Trek? Something with giant robots? Maybe something grittier and more cyberpunk?
Maybe this is making too light of what you're working so hard for. Follow-up and/or substitute question: what's one of your dream projects or goals? What kind of robot(s) does Nyx most want to create, or work on?
🥹🤭 awww jeez don’t make me all fricken bashful on main i’m supposed to be tough !!!! CYBERPUNK PLEASE AAAAH
ok ok but to answer seriously, so far in my academic career i’ve worked on autonomous cars (1/10 scale autonomous racecars built essentially from scratch, a full size autonomous EV gokart, and i’ve even been fortunate enough to work on a fully autonomous indy 500 car…essentially the most advanced autonomous vehicle in the world. shit was one of the most mind blowing things i’ve ever been a part of), i’ve worked on some cool solar stuff (solar water heater for an orphanage in Tijuana, solar carport concept design for a nonprofit who wants to propose the idea of charging EV cars while they’re parked outside at work all day and distributing leftover power to the building), and my personal favorite was this project i did for a class i took where i designed an autonomous campus rideshare prototype on one of the 1/10 scale cars that was able to use facial recognition to identify the student who requested the ride and would essentially drive itself to the drop off location!
the last one was the most fun for me bc it was the first time i really got into the programming side of robotics. the car would have a starting location where it waited for ride requests, then when you submitted one it would drive itself to the location using LiDAR for collision avoidance and GPS path training. but the real doozy was the pickup stage bc i had to develop the facial recognition code to be able to scan the student’s face upon arrival with computer vision in real time with a camera that was attached to the car, and then match it to a photo of the student that was stored in a database, and then try to match the name of that student to the name of the student who requested the ride! if it matched, the screen would be like “rider verified! hop in!” and proceed to the next stage of driving itself to the drop off point. if it didn’t, the screen would read “sorry, we couldn’t verify your identity with the ride request” and drive itself back to the parking spot where it would wait for the next ride request! and the database of faces would basically come from your student ID picture, cuz the rideshare service would be around campus for students only!
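(for flavor, here's a stripped-down Python sketch of what that verification check can look like, using the open-source face_recognition library. this is an illustration of the idea, not the actual project code, and the file names and requester lookup are made up:)
import face_recognition
def verify_rider(camera_frame_path: str, id_photo_path: str,
                 id_photo_name: str, requester_name: str) -> str:
    # encode the face seen by the car's camera and the face in the stored ID photo
    cam_encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(camera_frame_path))
    id_encodings = face_recognition.face_encodings(
        face_recognition.load_image_file(id_photo_path))
    if not cam_encodings or not id_encodings:
        return "sorry, we couldn't verify your identity with the ride request"
    # does the face at the car match the ID photo, and does that student's name
    # match whoever actually requested the ride?
    face_match = face_recognition.compare_faces([id_encodings[0]], cam_encodings[0])[0]
    if face_match and id_photo_name == requester_name:
        return "rider verified! hop in!"
    return "sorry, we couldn't verify your identity with the ride request"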
god it was so fucking challenging but i had so much fun with it. so uhhhh to avoid writing 18 more paragraphs about all the cool robotics shit that i’m obsessed with, i would say my dream project is definitely something that combines machine learning software, robotics operating systems programming, and mechanical design all in one. something that could take the technological advancements that we have made and turn it into something useful for everyone. fuck i’m such a nerd i’m sorry i did NOT mean to go on like this ahahahahaha
#nyx.answers#uhhh anyways…#do u see why i crave venomous all-consuming tickle torture#i need my brain turned to mush by something that is not engineering in order to survive lmao
Text
On Wednesday, acting FAA administrator Chris Rocheleau told the House Appropriations Committee that the Federal Aviation Administration plans to replace its aging air traffic control systems, which still rely on floppy disks and Windows 95 computers, Tom's Hardware reports. The agency has issued a Request for Information to gather proposals from companies willing to tackle the massive infrastructure overhaul.
“The whole idea is to replace the system. No more floppy disks or paper strips,” Rocheleau said during the committee hearing. US transportation secretary Sean Duffy called the project “the most important infrastructure project that we've had in this country for decades,” describing it as a bipartisan priority.
Most air traffic control towers and facilities across the US currently operate with technology that seems frozen in the 20th century, although that isn't necessarily a bad thing—when it works. Some controllers currently use paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft's Windows 95 operating system, which launched in 1995.
As Tom's Hardware notes, modernization of the system is broadly popular. Sheldon Jacobson, a University of Illinois professor who has studied risks in aviation, says that the system works remarkably well as is but that an upgrade is still critical, according to NPR. The aviation industry coalition Modern Skies has been pushing for ATC modernization and recently released an advertisement highlighting the outdated technology.
While the vintage systems may have inadvertently protected air traffic control from widespread outages like the CrowdStrike incident that disrupted modern computer systems globally in 2024, agency officials say 51 of the FAA's 138 systems are unsustainable due to outdated functionality and a lack of spare parts.
The FAA isn't alone in clinging to floppy disk technology. San Francisco's train control system still runs on DOS loaded from 5.25-inch floppy disks, with upgrades not expected until 2030 due to budget constraints. Japan has also struggled in recent years to modernize government record systems that use floppy disks.
If It Ain’t Broke?
Modernizing the air traffic control system presents engineering challenges that extend far beyond simply installing newer computers. Unlike typical IT upgrades, ATC systems must maintain continuous 24/7 operation, because shutting down facilities for maintenance could compromise aviation safety.
This uptime requirement eliminates the possibility of traditional system replacement approaches where old hardware gets swapped out during scheduled downtime. The replacement systems must also meet security requirements to resist cyberattacks. A successful breach of air traffic control infrastructure could paralyze national aviation networks, resulting in cascading effects that impact transportation, commerce, and emergency services.
And yet not everyone is convinced the planned massive overhaul will achieve the desired effects. In an NPR report on the issue, aviation industry analyst Robert W. Mann Jr. expressed skepticism about whether new systems will actually materialize.
“This has been the same mantra for the past 30 years. Give them more money. They'll build the new system. It'll work better, work harder,” Mann told NPR. “And we've been doing that for well over 30 years now, and we've gotten the same results.”
Still, recent failures have confirmed some vulnerabilities in the aging system. A January 2023 outage in the FAA's Notice to Airmen system forced the grounding of every flight nationwide for more than two hours. Experts blamed the failure on aging infrastructure and damaged database files, with sources telling CNN at the time that budget constraints had repeatedly delayed needed tech refreshes. More recently, radar and communication outages led to hundreds of delays and cancellations at Newark Liberty International Airport, with a wiring failure being cited as one cause.
The US Transportation Department has set a four-year timeline for completing the modernization project, though industry experts question whether this schedule is realistic given the scope and complexity involved. According to the NPR report, Jacobson thinks the administration's announced timeline is “wildly optimistic,” especially given the absence of a price tag for the sweeping plan.
The FAA has announced several “Industry Days” where companies can present their technologies and proposals to department officials. The White House has not yet disclosed the expected cost of the modernization effort. Despite the financial uncertainty, Duffy emphasized the perceived urgency of the project: “Everyone agrees—this is nonpartisan. Everyone knows we have to do it.”
This story originally appeared on Ars Technica.
Text
I doubt I have the reach to make this work, but...
I am officially offering freelance services as a programmer, and for anyone who contacts me via Tumblr I'll do a "pay what you can" model. I need the money, the practice, the contacts, and the "exposure". (*gag*) I like the people here and this way everyone can get what they need.
Officially I do back-end work but I've had my hand in a lot of things. I've been working as a software engineer for about 5 years, and I am about to earn my Master's degree. (And yet I'm still broke...) I trained as a data scientist but I'm working as an engineer.
So if you need a database designed/built/reorganized, or a simple phone/web app, or a website with e-commerce tools, or a small AI model, or a data analysis/visualization for a presentation/proposal, let's help each other out!
#services#selling#software engineering services#pay what you can#make me an offer#i need the money and i like doing this so lets work together#also did i mention im super broke#support a trans person#please reblog
Text
could I interest you in a search engine in these trying times?